Search for: All records

Creators/Authors contains: "Choi, Bryan H"

  1. This paper addresses the challenges of computational accountability in autonomous systems, particularly in Autonomous Vehicles (AVs), where safety and efficiency often conflict. We begin by examining current approaches such as cost minimization, reward maximization, human-centered approaches, and ethical frameworks, noting their limitations in addressing these challenges. Foreseeability is a central concept in tort law that limits the accountability and legal liability of an actor to a reasonable scope. Yet current data-driven methods to determine foreseeability are rigid, ignore uncertainty, and depend on simulation data. In this work, we advocate for a new computational approach to establish the foreseeability of autonomous systems based on the legal “BPL” formula. We outline open research challenges, using fully autonomous vehicles as a motivating example, and call for researchers to help autonomous systems make accountable decisions in safety-critical scenarios. (A brief illustrative sketch of the BPL comparison appears after this list.)
    Free, publicly-accessible full text available April 11, 2026
  2. The National Institute of Standards and Technology (NIST) has become a beacon of hope for those who trust in federal standards for software and AI safety. Moreover, lawmakers and commentators have indicated that compliance with NIST standards ought to shield entities from liability. With more than a century of expertise in scientific research and standard-setting, NIST would seem to be uniquely qualified to develop such standards. But as I argue in this paper, this faith is misplaced. NIST’s latest forays into risk management frameworks disavow concrete metrics or outcomes and solicit voluntary participation instead of providing stable mandates. That open-ended approach can be attributed to the reversal of NIST’s prior efforts to promulgate federal software standards during the 1970s and 1980s. The failure of those federal regulatory efforts highlights fundamental challenges inherent in software development that persist today. Policymakers should draw upon the lessons of NIST’s experience and recognize that federal standards are unlikely to be a silver bullet. Instead, they should heed NIST’s admonition that the practice of software development remains deeply fragmented for other intrinsic reasons. Any effort to establish a universal standard of care must grapple with the need to accommodate the broad heterogeneity of accepted practices in the field.
  3. The pursuit of software safety standards has stalled. In response, commentators and policymakers have looked increasingly to federal agencies to deliver new hope. Some place their faith in existing agencies, while others propose a new super agency to oversee software-specific issues. This turn reflects both optimism in the agency model and pessimism in other institutions such as the judiciary or private markets. This Essay argues that the agency model is not a silver bullet. Applying a comparative institutional choice lens, this Essay explains that the characteristic strengths of the agency model—expertise, uniformity, and efficiency—offer less advantage than one might expect in the software domain. Because software complexity exceeds the capacity of software expertise, software experts have been unable to devise standards that meaningfully assure safety. That root limitation is unlikely to change by amassing more software experts in a central agency. This Essay argues further that the institutional choice literature should embrace an information-centered approach, rather than a participation-centered approach, when confronting an area of scientific impotence. While participation is a useful proxy when each stakeholder has relevant information to contribute, it loses its efficacy when the complexity of the problem exceeds the ability of the participants. Instead, the focus should shift to constructing an empirical body of knowledge regarding the norms and customary practices in the field.
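
The legal “BPL” formula referenced in the first abstract is the Hand formula from tort law: an actor breaches the duty of care when the burden of adequate precautions B is less than the probability of harm P multiplied by the magnitude of the loss L, that is, B < P × L. Below is a minimal, hypothetical Python sketch of that comparison; the function name and numeric inputs are illustrative assumptions and are not taken from the papers listed above.

```python
def bpl_indicates_negligence(burden: float, probability: float, loss: float) -> bool:
    """Hand ("BPL") formula: negligence is indicated when the burden of
    adequate precautions is less than the expected harm, i.e., B < P * L.

    All arguments are hypothetical, illustrative quantities; a real
    computational-accountability system would need calibrated estimates and
    explicit uncertainty handling, which the first abstract identifies as an
    open challenge.
    """
    expected_harm = probability * loss
    return burden < expected_harm


# Example: a precaution costing 10 units, weighed against a 5% chance of a
# 1,000-unit loss (expected harm = 50 units), would be judged worth taking.
print(bpl_indicates_negligence(burden=10.0, probability=0.05, loss=1000.0))  # True
```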